12 research outputs found

    Boundaries of Semantic Distraction: Dominance and Lexicality Act at Retrieval

    Three experiments investigated memory for semantic information with the goal of determining boundary conditions for the manifestation of semantic auditory distraction. Irrelevant speech disrupted the free recall of semantic category-exemplars to an equal degree regardless of whether the speech coincided with the presentation or the test phase of the task (Experiment 1), and this disruption occurred regardless of whether the speech comprised random words or coherent sentences (Experiment 2). The effects of background speech were greater when the irrelevant speech was semantically related to the to-be-remembered material, but only when the irrelevant words were high in output dominance (Experiment 3). The implications of these findings for the processing of task material and the processing of background speech are discussed.

    Phonemes: Lexical access and beyond

    The Role of Spectral and Temporal Cues in Voice Gender Discrimination by Normal-Hearing Listeners and Cochlear Implant Users

    The present study investigated the relative importance of temporal and spectral cues in voice gender discrimination and vowel recognition by normal-hearing subjects listening to an acoustic simulation of cochlear implant speech processing and by cochlear implant users. In the simulation, the number of speech processing channels ranged from 4 to 32, thereby varying the spectral resolution; the cutoff frequencies of the channels’ envelope filters ranged from 20 to 320 Hz, thereby manipulating the available temporal cues. For normal-hearing subjects, results showed that both voice gender discrimination and vowel recognition scores improved as the number of spectral channels was increased. When only 4 spectral channels were available, voice gender discrimination significantly improved as the envelope filter cutoff frequency was increased from 20 to 320 Hz. For all spectral conditions, increasing the amount of temporal information had no significant effect on vowel recognition. Both voice gender discrimination and vowel recognition scores were highly variable among implant users. The performance of cochlear implant listeners was similar to that of normal-hearing subjects listening to comparable speech processing (4–8 spectral channels). The results suggest that both spectral and temporal cues contribute to voice gender discrimination and that temporal cues are especially important for cochlear implant users to identify the voice gender when there is reduced spectral resolution.
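    To make the simulation manipulation concrete, the sketch below implements a generic noise-excited channel vocoder of the kind commonly used to simulate cochlear implant processing: the number of analysis bands sets the spectral resolution, and the envelope low-pass cutoff sets how much temporal detail survives. This is a minimal Python/SciPy illustration under assumed design choices (log-spaced fourth-order Butterworth analysis bands from 100 Hz to 8 kHz, rectify-and-low-pass envelope extraction, white-noise carriers, a sampling rate of at least 16 kHz); the function name and parameters are hypothetical and do not reproduce the authors' exact processing.

    import numpy as np
    from scipy.signal import butter, sosfilt

    def vocode(x, fs, n_channels=4, env_cutoff_hz=160.0, f_lo=100.0, f_hi=8000.0):
        # Log-spaced band edges spanning the assumed analysis range (f_hi must be below fs/2).
        edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
        # Shared low-pass filter used to smooth each band's envelope.
        sos_env = butter(2, env_cutoff_hz, btype="lowpass", fs=fs, output="sos")
        out = np.zeros_like(x, dtype=float)
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos_band = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfilt(sos_band, x)                            # analysis band
            env = sosfilt(sos_env, np.abs(band))                   # full-wave rectify, then low-pass
            carrier = sosfilt(sos_band, np.random.randn(len(x)))   # band-limited noise carrier
            out += env * carrier                                   # re-impose the envelope on the noise
        # Match the overall level of the output to the input.
        out *= np.sqrt(np.mean(x ** 2) / (np.mean(out ** 2) + 1e-12))
        return out

    Lowering env_cutoff_hz toward 20 Hz strips periodicity cues while raising it toward 320 Hz restores them, and lowering n_channels toward 4 coarsens the spectral detail, mirroring the two manipulations described in the abstract above.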

    Voice processing and voice-identity recognition

    The human voice is the most important sound source in our environment, not only because it produces speech, but also because it conveys information about the speaker. In many situations, listeners understand the speech message and recognize the speaker with minimal effort. Psychophysical studies have investigated which voice qualities (such as vocal timbre) distinguish speakers and allow listeners to recognize speakers. Glottal and vocal tract characteristics strongly influence perceived similarity between speakers and serve as cues for voice-identity recognition. However, the importance of a particular voice quality for voice-identity recognition depends on the speaker and the stimulus. Voice-identity recognition relies on a network of brain regions comprising a core system of auditory regions within the temporal lobe (including regions dedicated to processing glottal and vocal tract characteristics and regions that play more abstract roles) and an extended system of nonauditory regions representing information associated with specific voice identities (e.g., faces and names). This brain network is supported by early, direct connections between the core voice system and an analogous core face system. Precisely how all these brain regions work together to accomplish voice-identity recognition remains an open question; answering it will require rigorous testing of hypotheses derived from theoretical accounts of voice processing.